Method and apparatus for dynamic bit allocation to encode audio signals, and computer-readable medium
Patent Abstract:
SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR DYNAMIC BIT ALLOCATION. A dynamic bit allocation operation determines a bit allocation for each of a plurality of vectors, based on a corresponding plurality of gain factors, and compares each allocation with a threshold value that is based on the dimensionality of the vector.
Publication number: BR112013002166B1
Application number: R112013002166-7
Filing date: 2011-07-29
Publication date: 2021-02-02
Inventors: Venkatesh Krishnan; Vivek Rajendran; Ethan R. Duni
Applicant: Qualcomm Incorporated
IPC main class:
Patent Description:
Field of the Invention [001] This invention relates to the field of audio signal processing. Description of the Prior Art [002] Encoding schemes based on the modified discrete cosine transform (MDCT) are typically used to encode generalized audio signals, which may include speech and/or non-speech content, such as music. Examples of existing audio codecs using MDCT encoding include MPEG-1 Audio Layer 3 (MP3), Dolby Digital (Dolby Labs., London, UK; also called AC-3 and standardized as ATSC A/52), Vorbis (Xiph.Org Foundation, Somerville, MA), Windows Media Audio (WMA, Microsoft Corp., Redmond, WA), Adaptive Transform Acoustic Coding (ATRAC, Sony Corp., Tokyo, JP), and Advanced Audio Coding (AAC, as standardized most recently in ISO/IEC 14496-3:2009). MDCT coding is also a component of some telecommunication standards, such as the Enhanced Variable Rate Codec (EVRC, as standardized in document C.S0014-D v2.0, January 25, 2010, of the Third Generation Partnership Project 2). The G.718 codec ("Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s", Telecommunication Standardization Sector (ITU-T), Geneva, CH, June 2008, corrected in November 2008 and August 2009, amended in March 2009 and March 2010) is an example of a multilayer codec that uses MDCT encoding. Summary of the Invention [003] A bit allocation method according to a general configuration includes, for each of a plurality of vectors, calculating a corresponding gain factor of a plurality of gain factors. This method also includes, for each of the plurality of vectors, calculating a corresponding bit allocation that is based on the gain factor. This method also includes, for at least one of the plurality of vectors, determining that the corresponding bit allocation is not greater than a minimum allocation value. This method also includes changing the corresponding bit allocation, in response to the determination, for each of the at least one vector. Computer-readable storage media (e.g., non-transitory media) having tangible features that cause a machine reading the features to perform such a method are also described. [004] An apparatus for allocating bits according to a general configuration includes means for calculating, for each of a plurality of vectors, a corresponding gain factor of a plurality of gain factors, and means for calculating, for each of the plurality of vectors, a corresponding bit allocation that is based on the gain factor. This apparatus also includes means for determining, for at least one of the plurality of vectors, that the corresponding bit allocation is not greater than a minimum allocation value, and means for changing the corresponding bit allocation, in response to the determination, for each of the at least one vector. [005] An apparatus for allocating bits according to another general configuration includes a gain factor calculator configured to calculate, for each of a plurality of vectors, a corresponding gain factor of a plurality of gain factors, and a bit allocation calculator configured to calculate, for each of the plurality of vectors, a corresponding bit allocation that is based on the gain factor.
This apparatus also includes a comparator configured to determine, for at least one of the plurality of vectors, that the corresponding bit allocation is not greater than a minimum allocation value, and an allocation adjustment module configured to change the corresponding bit allocation, in response to the determination, for each of the at least one vector. Brief Description of Drawings [006] Figure 1A - shows a flowchart for an M100 method according to a general configuration. [007] Figure 1B - shows a flowchart for a T210 implementation of task T200. [008] Figure 1C - shows a flowchart for a T220 implementation of the T210 task. [009] Figure 1D - shows a flowchart for a T230 implementation of the T220 task. [0010] Figure 2 - shows an example of sub-bands selected in a low-band audio signal. [0011] Figure 3 - shows an example of selected sub-bands and residual components in a high-band audio signal. [0012] Figure 4A - shows an example of a relationship between subband locations in a reference frame and a target frame. [0013] Figure 4B - shows a flowchart for a T240 implementation of the T230 task. [0014] Figures 5A-5D - show examples of gain-shape vector quantization structures. [0015] Figure 6A - shows a flowchart for a T250 implementation of the T230 task. [0016] Figure 6B - shows a flowchart for a T255 implementation of the T250 task. [0017] Figure 7A - shows a flowchart of a T260 implementation of the T250 task. [0018] Figure 7B - shows a flowchart for a T265 implementation of the T260 dynamic allocation task. [0019] Figure 8A - shows a flowchart of a TA270 implementation of the T230 dynamic bit allocation task. [0020] Figure 8B - shows a block diagram of a T280 implementation of the T220 dynamic bit allocation task. [0021] Figure 8C - shows a flowchart of an M110 implementation of the M100 method. [0022] Figure 9 - shows an example of pulse coding. [0023] Figure 10A - shows a block diagram of a T290 implementation of the T280 task. [0024] Figure 10B - shows a flowchart for a T295 implementation of the T290 dynamic allocation task. [0025] Figure 11A - shows a flowchart for a T225 implementation of the T220 dynamic allocation task. [0026] Figure 11B - shows an example of a subset of a set of ordered spectral coefficients. [0027] Figure 12A - shows a block diagram of an apparatus MF100 for allocating bits according to a general configuration. [0028] Figure 12B - shows a block diagram of an apparatus A100 for allocating bits according to a general configuration. [0029] Figure 13A - shows a block diagram of an E100 encoder according to a general configuration. Figure 13D shows a block diagram of a corresponding D100 decoder. [0030] Figure 13B - shows a block diagram of an E110 implementation of the E100 encoder. Figure 13E shows a block diagram of a corresponding D110 implementation of the D100 decoder. [0031] Figure 13C - shows a block diagram of an E120 implementation of the E110 encoder. Figure 13F - shows a block diagram of a corresponding D120 implementation of the D100 decoder. [0032] Figures 14A-E - show a range of applications for the E100 encoder. [0033] Figure 15A - shows a block diagram of an MZ100 method of signal classification. [0034] Figure 15B - shows a block diagram of a D10 communication device. [0035] Figure 16 - shows front, rear and side views of an H100 telephone set. [0036] Figure 17 - shows a block diagram of an example of a multiple-band encoder. [0037] Figure 18 - shows a flowchart of an example of a method for encoding multiple bands.
[0038] Figure 19 - shows a block diagram of an E200 encoder. [0039] Figure 20 - shows an example of a rotation matrix. Detailed Description of the Invention [0040] It may be desirable to use a dynamic bit allocation scheme that is based on encoded gain parameters that are known to both the encoder and the decoder, so that the scheme can be executed without explicitly transmitting side information from the encoder to the decoder. [0041] Unless expressly limited by its context, the term "signal" is used here to indicate any of its customary meanings, including the state of a memory location (or set of memory locations) expressed on a wire, bus, or other transmission medium. Unless expressly limited by its context, the term "generate" is used here to indicate any of its customary meanings, such as computing or otherwise producing. Unless expressly limited by its context, the term "calculate" is used here to indicate any of its customary meanings, such as computing, evaluating, smoothing, and/or selecting from a plurality of values. Unless expressly limited by its context, the term "obtain" is used to indicate any of its customary meanings, such as calculating, deriving, receiving (for example, from an external device), and/or retrieving (for example, from an array of storage elements). Unless expressly limited by its context, the term "select" is used to indicate any of its customary meanings, such as identifying, indicating, applying, and/or using at least one, and fewer than all, of a set of two or more. Where the term "comprising" is used in this description and in the claims, it does not exclude other elements or operations. The term "based on" (as in "A is based on B") is used to indicate any of its customary meanings, including the cases (i) "derived from" (for example, "B is a precursor of A"), (ii) "based on at least" (for example, "A is based on at least B") and, if appropriate in the specific context, (iii) "equal to" (for example, "A is equal to B"). Similarly, the term "in response to" is used to indicate any of its customary meanings, including "in response to at least". [0042] Unless otherwise indicated, the term "series" is used to indicate a sequence of two or more items. The term "logarithm" is used to indicate the base-ten logarithm, although extensions of such an operation to other bases are within the scope of this description. The term "frequency component" is used to indicate one of a set of frequencies or frequency bands of a signal, such as a sample of a representation of the signal in the frequency domain (for example, as produced by a fast Fourier transform) or a subband of the signal (for example, a subband on the Bark scale or the mel scale). [0043] Unless otherwise stated, any description of an operation of an apparatus that has a specific characteristic is also expressly intended to describe a method that has a similar characteristic (and vice versa), and any description of an operation of an apparatus according to a specific configuration is also expressly intended to describe a method according to a similar configuration (and vice versa). The term "configuration" can be used with reference to a method, apparatus, and/or system as indicated by its specific context. The terms "method", "process", "procedure" and "technique" are used in a generic and interchangeable way, unless otherwise indicated by the specific context. A "task" that has multiple sub-tasks is also a method.
The terms "apparatus" and "device" are also used in a generic and interchangeable manner, unless otherwise indicated by the specific context. The terms "element" and "module" are typically used to indicate part of a larger configuration. Unless expressly limited by its context, the term "system" is used here to indicate any of its customary meanings, including "a group of elements that interact to serve a common purpose". [0044] The systems, methods and apparatus described herein are generally applicable to coding representations of audio signals in the frequency domain. A typical example of such a representation is a series of transform coefficients in the transform domain. Examples of suitable transforms include discrete orthogonal transforms, such as sinusoidal unitary transforms. Examples of suitable sinusoidal unitary transforms include discrete trigonometric transforms, which include, without limitation, discrete cosine transforms (DCTs), discrete sine transforms (DSTs), and the discrete Fourier transform (DFT). Other examples of suitable transforms include lapped versions of such transforms. A specific example of a suitable transform is the modified DCT (MDCT) introduced above. [0045] Reference is made throughout this description to a "low band" and a "high band" (equivalently, "upper band") of an audio frequency range, and to the specific example of a low band from zero to four kilohertz (kHz) and a high band from 3.5 to seven kHz. It is expressly noted that the principles discussed here are not limited to this specific example in any way, unless such a limit is explicitly stated. Other examples (again without limitation) of frequency bands to which the application of these principles of coding, decoding, allocation, quantization and/or other processing is expressly contemplated and described here include a low band that has a lower limit of any of 0, 25, 50, 100, 150 and 200 Hz and an upper limit of any of 3000, 3500, 4000 and 4500 Hz, and a high band that has a lower limit of any of 3000, 3500, 4000, 4500, and 5000 Hz and an upper limit of any of 6000, 6500, 7000, 7500, 8000, 8500 and 9000 Hz. The application of such principles (again without limitation) to a high band that has a lower limit of any of 3000, 3500, 4000, 4500, 5000, 5500, 6000, 6500, 7000, 7500, 8000, 8500 and 9000 Hz and an upper limit of any of 10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5, 15, 15.5 and 16 kHz is also expressly contemplated and described herein. It is also explicitly noted that, although a high-band signal is typically converted to a lower sampling rate at an earlier stage of the coding process (for example, by means of resampling and/or decimation), it remains a high-band signal, and the information it carries continues to represent the high-band audio frequency range. [0046] An encoding scheme that includes dynamic bit allocation as described here can be applied to encode any audio signal (for example, including speech). Alternatively, it may be desirable to use such an encoding scheme only for non-speech audio (for example, music). In such a case, the encoding scheme can be used with a classification scheme to determine the content type of each frame of the audio signal and select an appropriate encoding scheme. [0047] An encoding scheme that includes the dynamic bit allocation described here can be used as a primary codec or as a layer or stage in a multilayer or multistage codec.
In one example, such an encoding scheme is used to encode a portion of the frequency content of an audio signal (for example, a low band or a high band), and another encoding scheme is used to encode another portion of the frequency content of the signal. In another example, such an encoding scheme is used to encode a residual (that is, an error between the original and encoded signals) of another encoding layer. [0048] Low-bit-rate encoding of audio signals often requires an optimal use of the available bits to encode the contents of the audio signal frame. The contents of the audio signal frames may consist of the PCM (pulse code modulation) samples of the signal or of a representation of the signal in the transform domain. The encoding of each frame typically includes dividing the frame into a plurality of sub-bands (that is, dividing the frame, as a vector, into a plurality of sub-vectors), assigning a bit allocation to each sub-vector, and encoding each sub-vector using the corresponding allocated number of bits. It may be desirable, in a typical audio coding application, for example, to perform vector quantization on a large number (for example, ten, twenty, thirty or forty) of different subband vectors for each frame. Examples of frame size include (without limitation) 100, 120, 140, 160 and 180 values (for example, transform coefficients), and examples of subband length include (without limitation) five, six, seven, eight, nine, ten, eleven, twelve and sixteen. [0049] One bit allocation approach is to divide a total bit allocation evenly among the sub-vectors. For example, the number of bits allocated to each sub-vector can be fixed from frame to frame. In this case, the decoder can already be configured with knowledge of the bit allocation scheme, so that there is no need for the encoder to transmit this information. However, the goal of optimal bit utilization may be to ensure that the various components of the audio signal frame are encoded with a number of bits that is related (for example, proportional) to their perceptual significance. Some of the input subband vectors may be less significant (for example, they may capture little energy), so that a better result can be obtained by allocating few bits to encode these vectors and more bits to encode the more important subband vectors. [0050] Since a fixed allocation scheme does not account for variations in the relative perceptual significance of the sub-vectors, it may be desirable to use, instead, a dynamic allocation scheme, so that the number of bits allocated to each sub-vector may vary from frame to frame. In this case, information regarding the specific bit allocation scheme used for each frame is provided to the decoder so that the frame can be decoded. [0051] Most audio encoders typically provide such bit allocation information to the decoder as side information. Audio coding algorithms such as AAC, for example, typically use side information or entropy coding schemes, such as Huffman coding, to transmit the bit allocation information. The use of bits just to transmit the bit allocation is inefficient, since this side information is not used directly in encoding the signal. While variable-length codewords such as Huffman coding or arithmetic coding can provide some advantage, long codewords can be encountered, which can reduce coding efficiency.
[0052] It may be desirable, instead, to use a dynamic bit allocation scheme that is based on encoded gain parameters that are known to both the encoder and the decoder, so that the scheme can be executed without the explicit transmission of side information from the encoder to the decoder. Such efficiency can be especially important for low-bit-rate applications, such as cellular telephony. In one example, such dynamic bit allocation can be implemented without side information by allocating bits to quantize vectors in accordance with the associated gain values. [0053] Figure 1A shows a flowchart of an M100 method according to a general configuration that includes a T100 division task and a T200 bit allocation task. Task T100 receives a vector that will be encoded (for example, a plurality of coefficients of a frame in the transform domain) and divides it into a set of sub-vectors. Sub-vectors may, but need not, overlap and may even be separated from each other (in the specific examples described here, the sub-vectors do not overlap). This division can be predetermined (for example, independent of the content of the vector), so that each input vector is divided in the same way. One example of predetermined division divides each input vector of 100 elements into three sub-vectors of respective lengths (25, 35, 40). Another example of predetermined division divides an input vector of 140 elements into a set of twenty sub-vectors of length seven. Another example of predetermined division divides an input vector of 280 elements into a set of forty sub-vectors of length seven. [0054] Alternatively, this division can be variable, so that the input vectors are divided differently from one frame to the next (for example, according to some perceptual criteria). It may be desirable, for example, to perform efficient encoding of an audio signal in the transform domain by detecting and encoding the signal's harmonic components. Figure 2 shows a plot of magnitude versus frequency in which eight selected sub-bands of length seven, which correspond to harmonically spaced peaks of a low-band linear prediction coding (LPC) residual signal, are indicated by bars near the frequency axis. Figure 3 shows a similar example for a high-band LPC residual signal and indicates the residual components located between and outside the selected sub-bands. In such a case, it may be desirable to make a dynamic allocation between the set of sub-bands and the total residual, to make a dynamic allocation among the sub-bands of the set, and/or to make a dynamic allocation among the residual components. An additional description of harmonic modeling and harmonic coding can be found in the applications listed above, to which this application claims priority. [0055] Another example of a variable division scheme identifies a set of perceptually important sub-bands in the current frame (also called a target frame) based on the locations of perceptually important sub-bands in a coded version of another frame (also called a reference frame), which may be the previous frame. Figure 4A shows an example of a subband selection operation in such an encoding scheme (also called dependent-mode encoding). An additional description of dependent coding can be found in the applications listed above, to which this application claims priority. [0056] Another example of a residual signal is obtained by encoding a set of selected sub-bands and subtracting the encoded set from the original signal.
In this case, it may be desirable to divide the resulting residual into a set of sub-vectors (for example, according to a predetermined division) and to make a dynamic allocation among the sub-vectors. [0057] The selected sub-bands can be encoded using a vector quantization scheme (for example, a gain-shape vector quantization scheme), and the residual signal can be encoded using a factorial pulse coding (FPC) scheme or a combinatorial pulse coding scheme. [0058] From a total number of bits to be allocated among the plurality of vectors, task T200 assigns a bit allocation to each of the several vectors. This allocation can be dynamic, so that the number of bits allocated to each vector can change from frame to frame. [0059] The M100 method can be arranged to pass the bit allocations produced by task T200 to an operation that encodes the sub-vectors for storage or transmission. One type of such operation is a vector quantization (VQ) scheme, which encodes a vector by associating it with an entry in each of one or more codebooks (which are also known to the decoder) and using the index or indices of these entries to represent the vector. The length of a codebook index, which determines the maximum number of entries in the codebook, can be any arbitrary integer that is considered suitable for the application. An implementation of the M100 method as performed in a decoder can be arranged to pass the bit allocations produced by task T200 to an operation that decodes the sub-vectors in order to reproduce an encoded audio signal. [0060] For the case in which two or more of the plurality of vectors have different lengths, task T200 can be implemented to calculate the bit allocation for each vector m (where m = 1, 2, ..., M) based on the number of dimensions (that is, the length) of the vector. In this case, task T200 can be configured to calculate the bit allocation Bm for each vector m as B x (Dm/Dh), where B is the total number of bits to be allocated, Dm is the dimension of vector m, and Dh is the sum of the dimensions of all of the vectors. In some cases, the T100 task can be implemented to determine the dimensions of the vectors by determining a location for each of a set of sub-bands, based on a set of modeling parameters. For harmonic-mode coding, the modeling parameters can include a fundamental frequency F0 (within the current frame or within another band of the frame) and a harmonic spacing d between adjacent subband peaks. The parameters for a harmonic model can also include a corresponding phase fluctuation value for each of one or more of the subbands. For dependent-mode coding, the modeling parameters may include a phase fluctuation value, with respect to the location of a corresponding significant band from a previous coded frame, for each of one or more sub-bands. The locations and dimensions of the residual components of the frame can then be determined based on the locations of the sub-bands. Residual components, which may include parts of the spectrum that are between and/or outside the sub-bands, can also be concatenated into one or more larger vectors. [0061] Figure 1B shows a flowchart of a T210 implementation of the dynamic bit allocation task T200 that includes sub-tasks TA200 and TA300. The TA200 task calculates bit allocations for the vectors, and the TA300 task compares the allocations with a minimum allocation value. The TA300 task can be implemented to compare each allocation with the same minimum allocation value.
Alternatively, task TA300 can be implemented to compare each allocation with a minimum allocation value that can be different for two or more of the plurality of vectors. [0062] The TA300 task can be implemented to increase a bit allocation that is less than the minimum allocation value (for example, by changing the allocation to the minimum allocation value). Alternatively, task TA300 can be implemented to reduce a bit allocation that is less than (alternatively, not greater than) the minimum allocation value to zero. [0063] Figure 1C shows a flowchart of a T220 implementation of the dynamic bit allocation task T200 that includes the sub-task TA100 and a TA210 implementation of the allocation task TA200. The TA100 task calculates a corresponding gain factor for each of the plurality of vectors, and the TA210 task calculates a bit allocation for each vector based on the corresponding gain factor. It is typically desirable for the encoder to calculate bit allocations using the same gain factors as the decoder. For example, it may be desirable that the gain factor calculation task TA100 performed in the decoder produce the same result as the task TA100 performed in the encoder. Consequently, it may be desirable for the TA100 task performed in the encoder to include quantization of the gain factors. [0064] Gain-shape vector quantization is an encoding technique that can be used to encode signal vectors efficiently (for example, vectors representing sound or image data) by decoupling the energy of the vector(s), which is represented by a gain factor, from the direction of the vector(s), which is represented by a shape. Such a technique may be especially suitable for applications in which the dynamic range of the signal may be large, such as the encoding of audio signals, such as speech and/or music. [0065] A gain-shape vector quantizer (GSVQ) encodes the shape and gain of an input vector x separately. Figure 5A shows an example of a gain-shape vector quantization operation. In this example, the shape quantizer SQ100 is configured to perform a vector quantization (VQ) scheme by selecting the quantized shape vector Ŝ from a codebook as the closest vector in the codebook to the input vector x (for example, in the mean-squared-error sense) and transmitting the index of the vector Ŝ in the codebook. In another example, the shape quantizer SQ100 is configured to perform a pulse-coding quantization scheme by selecting a unit-norm pattern of unit pulses that is closest to the input vector x (for example, closest in the mean-squared-error sense) and transmitting a codebook index for this pattern. The norm calculator NC10 is configured to calculate the norm ||x|| of the input vector x, and the gain quantizer GQ10 is configured to quantize the norm to produce a quantized gain factor. The gain quantizer GQ10 can be configured to quantize the norm as a scalar or to combine the norm with other gains (for example, norms of others of the plurality of vectors) into a gain vector for vector quantization. [0066] The shape quantizer SQ100 is typically implemented as a vector quantizer with the constraint that the codebook vectors have unit norm (that is, they are all points on the unit hypersphere). This constraint simplifies the codebook search (for example, from a mean-squared-error calculation to an inner-product operation).
For example, the shape quantizer SQ100 can be configured to select the vector Ŝ from a codebook of K unit-norm vectors S_k, k = 0, 1, ..., K - 1, according to an operation such as arg max_k (x^T S_k). Such a search can be exhaustive or optimized. For example, the vectors can be arranged within the codebook to support a specific search strategy. [0067] In some cases, it may be desirable to constrain the input of the shape quantizer SQ100 to be of unit norm (for example, to enable a specific codebook search strategy). Figure 5B shows such an example of a gain-shape vector quantization operation. In this example, the normalizer NL10 is configured to normalize the input vector x to produce the norm ||x|| and a unit-norm shape vector S = x/||x||, and the shape quantizer SQ100 is arranged to receive the shape vector S as its input. In such a case, the shape quantizer SQ100 can be configured to select the vector Ŝ from a codebook of K unit-norm vectors S_k, k = 0, 1, ..., K - 1, according to an operation such as arg max_k (S^T S_k). [0068] Alternatively, the shape quantizer SQ100 can be configured to select the vector Ŝ from a codebook of unit-pulse patterns. In this case, the quantizer SQ100 can be configured to select the pattern that, when normalized, is closest to the shape vector S (for example, closest in the mean-squared-error sense). Such a pattern is typically encoded as a codebook index that indicates the number of pulses and the sign for each occupied position in the pattern. The pattern selection can include scaling the input vector and matching it to the pattern, and the quantized vector Ŝ is generated by normalizing the selected pattern. Examples of pulse coding schemes that can be performed by the quantizer SQ100 to encode such patterns include factorial pulse coding and combinatorial pulse coding. [0069] The gain quantizer GQ10 can be configured to perform scalar quantization of the gain or to combine the gain with other gains into a gain vector for vector quantization. In the example of Figures 5A and 5B, the gain quantizer GQ10 is arranged to receive and quantize the gain of the input vector x as the norm ||x|| (also called "open-loop gain"). In other cases, the gain is based on the correlation of the quantized shape vector Ŝ with the original shape. Such a gain is called a "closed-loop gain". Figure 5C shows an example of such a gain-shape vector quantization operation, which includes an inner product calculator IP10 and an SQ110 implementation of the shape quantizer SQ100 that also produces the quantized shape vector Ŝ. The calculator IP10 is arranged to calculate the inner product of the quantized shape vector Ŝ and the original input vector (for example, Ŝ^T x), and the gain quantizer GQ10 is arranged to receive and quantize this product as the closed-loop gain. When the shape quantizer SQ110 produces a poorly quantized result, the closed-loop gain will be lower. When the shape quantizer quantizes the shape accurately, the closed-loop gain will be higher. When the shape quantization is perfect, the closed-loop gain is equal to the open-loop gain. Figure 5D shows an example of a similar gain-shape vector quantization operation, which includes a normalizer NL20 configured to normalize the input vector x to produce a unit-norm shape vector S = x/||x|| as input to the shape quantizer SQ110.
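To make the gain-shape operations of Figures 5A-5D concrete, the following minimal sketch (an illustration only, not the patented implementation; the codebook, function name, and random example data are assumptions) computes the open-loop gain as the norm ||x||, selects the quantized shape by an inner-product search over a unit-norm codebook, and computes the closed-loop gain as the inner product of the quantized shape with the original input.

```python
import numpy as np

def gain_shape_quantize(x, codebook):
    """Toy gain-shape VQ; 'codebook' is a (K, D) array of unit-norm shape vectors S_k."""
    open_loop_gain = float(np.linalg.norm(x))     # ||x||, the open-loop gain
    shape = x / open_loop_gain                    # unit-norm shape vector S = x / ||x||
    k = int(np.argmax(codebook @ shape))          # arg max_k (S^T S_k): inner-product search
    s_hat = codebook[k]                           # quantized shape vector
    closed_loop_gain = float(s_hat @ x)           # inner product of quantized shape and input
    return k, open_loop_gain, closed_loop_gain

# Example with a random unit-norm codebook (for illustration only).
rng = np.random.default_rng(0)
cb = rng.standard_normal((64, 8))
cb /= np.linalg.norm(cb, axis=1, keepdims=True)
x = rng.standard_normal(8)
print(gain_shape_quantize(x, cb))
```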
[0070] In the source coding sense, the closed-loop gain can be considered to be more nearly optimal, since it takes the quantization error into account in a specific way, unlike the open-loop gain. However, it may be desirable to perform upstream processing based on this gain value. Specifically, it may be desirable to use this gain factor to decide how to quantize the shape (for example, to dynamically allocate bits among the shapes). Such dependence of the shape coding operation on the gain may make it desirable to use an open-loop gain calculation (for example, to avoid side information). In this case, since the gain controls the allocation of bits, the shape quantization depends explicitly on the gain in both the encoder and the decoder, so a shape-independent open-loop gain calculation is used. An additional description of gain-shape vector quantization, including multistage shape quantization structures that can be used in conjunction with a dynamic allocation scheme as described here, can be found in the applications listed above, to which this application claims priority. [0071] It may be desirable to combine a predictive gain coding structure (for example, a differential pulse code modulation scheme) with a transform structure for gain coding. In one example, a vector of subband gains for a frame (for example, a vector of the gain factors of the plurality of vectors) is input to the transform coder to obtain mean and differential components, with the predictive coding operation running only on the mean component (for example, from frame to frame). In one example, each element m of the input gain vector of length M is calculated according to an expression such as 10 log10 ||x_m||^2, where x_m denotes the corresponding subband vector. It may be desirable to use such a method in conjunction with the T210 dynamic allocation task described here. Since the mean component does not affect the dynamic allocation among the vectors, the differential components (which are coded without dependence on the past) can be used as the gain factors in an implementation of the T210 dynamic allocation task, to obtain an operation that is robust to failure of the predictive coding operation (for example, resulting from an erasure of the previous frame). Figure 20 shows an example of a rotation matrix (where S is the column vector [1 1 1 ... 1]^T / sqrt(M)) that can be applied by the transform coder to the length-M gain factor vector in order to obtain a rotated vector that has the mean component in its first element and corresponding differential components in the other elements. In this case, the differential component for the element occupied by the mean component can be reconstructed from the mean component and the other differential components. [0072] Task TA210 can be configured to calculate a bit allocation Bm for each vector m such that the allocation is based on the number of dimensions Dm and the energy Em of the vector (for example, on the energy per dimension of the vector). In one example, the bit allocation Bm for each vector m is initialized to the value B x (Dm/Dh) + a log2(Em/Dm) - b Fz, where Fz is calculated as the sum Σ (Dm/Dh) x log2(Em/Dm) over all vectors m. Exemplary values for each of the factors a and b include 0.5. For the case in which the vectors m are unit-norm vectors (for example, shape vectors), the energy Em of each vector in task TA210 is the corresponding gain factor.
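As a concrete reading of the initial allocation rule of task TA210, the following sketch (a minimal illustration under the assumptions that the energies Em are the corresponding gain factors and that a = b = 0.5; the function name and example numbers are not from this description) evaluates Bm = B x (Dm/Dh) + a log2(Em/Dm) - b Fz for each vector:

```python
import math

def initial_bit_allocation(total_bits, dims, energies, a=0.5, b=0.5):
    """Initial per-vector allocations Bm = B*(Dm/Dh) + a*log2(Em/Dm) - b*Fz (sketch of task TA210)."""
    dh = sum(dims)                                # Dh: sum of the dimensions of all vectors
    fz = sum((dm / dh) * math.log2(em / dm)       # Fz = sum over m of (Dm/Dh) * log2(Em/Dm)
             for dm, em in zip(dims, energies))
    return [total_bits * (dm / dh) + a * math.log2(em / dm) - b * fz
            for dm, em in zip(dims, energies)]

# Example: three sub-vectors of dimensions 7, 7, 8 with gain-based energies.
print(initial_bit_allocation(60, [7, 7, 8], [4.0, 16.0, 2.0]))
```

The allocations produced this way are real-valued; the limit comparisons and integer restrictions described in the following paragraphs are applied afterwards.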
[0073] Figure 1D shows a flowchart for a T230 implementation of the dynamic allocation task T200 that includes a TA310 implementation of the comparison task TA300. Task TA310 compares the current allocation for each vector m with a limit Tm, which is based on the number of dimensions Dm of the vector. For each vector m, the limit Tm is calculated as a monotonically non-decreasing function of the corresponding number of dimensions Dm. The limit Tm can be calculated, for example, as the minimum of Dm and a value V. In one example, the value of Dm ranges from five to thirty-two, and the value of V is twelve. In this case, a vector of five dimensions will fail the comparison if its current allocation is less than five bits, while a vector of twenty-four dimensions will pass the comparison provided that its current allocation is at least twelve bits. [0074] The T230 task can be configured so that the allocations for vectors that fail the comparison in the TA310 task are reset to zero. In this case, the bits that were previously allocated to these vectors can be used to increase the allocations for one or more other vectors. Figure 4B shows a flowchart for a T240 implementation of the T230 task, which includes a sub-task TA400 that performs such redistribution (for example, by repeating the TA210 task, according to a revised number of bits available for allocation, for the vectors whose allocations are still subject to change). [0075] It is observed in particular that, although task TA210 can be implemented to perform a dynamic allocation based on perceptual criteria (for example, energy per dimension), the corresponding implementation of the M100 method can be configured to produce a result that depends only on the input gain values and the vector dimensions. Consequently, a decoder that is aware of the same gain values and quantized vector dimensions can execute the M100 method in order to obtain the same bit allocations, without the need for a corresponding encoder to transmit any side information. [0076] It may be desirable to configure the dynamic bit allocation task T200 to impose a maximum value on the bit allocations calculated by the TA200 task (for example, the TA210 task). Figure 6A shows a flowchart of such a T250 implementation of task T230 that includes a TA305 implementation of sub-task TA300, which compares the bit allocations calculated in task TA210 with a maximum allocation value and/or with a minimum allocation value. Task TA305 can be implemented to compare each allocation with the same maximum allocation value. Alternatively, task TA305 can be implemented to compare each allocation with a maximum allocation value that can be different for two or more of the plurality of vectors. [0077] Task TA305 can be configured to correct an allocation that exceeds a maximum allocation value Bmax (also called the upper limit) by changing the bit allocation of the vector to the value Bmax and removing the vector from the active allocation (for example, by preventing further changes to the allocation for this vector). Alternatively or additionally, task TA305 can be configured to reduce a bit allocation that is less than (alternatively, not greater than) a minimum allocation value Bmin (also called the lower limit) to zero, or to correct an allocation that is less than the value Bmin by changing the vector's bit allocation to the value Bmin and removing the vector from the active allocation (for example, by preventing further changes to the allocation for this vector).
For vectors that will be encoded by pulses, it may be desirable to use values of Bmin and/or Bmax that correspond to whole numbers of pulses, or to skip task TA305 for such vectors. [0078] Task TA305 can be configured to iteratively correct the worst current over-allocations and/or under-allocations until no limit violations remain. Task TA305 can be implemented to perform additional operations after correcting all limit violations: for example, updating the values of Dh and Fz, calculating a number of available bits Bav that accounts for the corrective reallocations, and recalculating the allocations Bm for the vectors m currently in the active allocation (for example, according to an expression such as Dm x (Bav/Dh) + a log2(Em/Dm) - b Fz). [0079] Figure 6B shows a flowchart for a T255 implementation of the T250 dynamic allocation task, which also includes an instance of the TA310 task. [0080] It may be desirable to configure the dynamic allocation task T200 to impose an integer restriction on each of the bit allocations. Figure 7A shows a flowchart of such a T260 implementation of task T250, which includes an instance of task TA400 and sub-tasks TA500 and TA600. [0081] After the deallocated bits are distributed in task TA400, task TA500 imposes an integer restriction on the bit allocations Bm by truncating each allocation Bm to the largest integer not greater than Bm. For vectors that will be encoded by pulses, it may be desirable to truncate the corresponding allocation Bm to the largest integer not greater than Bm that corresponds to an integer number of pulses. The TA500 task also updates the number of available bits Bav (for example, according to an expression such as B - Σ Bm, where the sum is over m = 1, ..., M). The TA500 task can also be configured to store the truncated residue for each vector (for example, for later use in the TA600 task). In one example, task TA500 stores the truncated residue for each vector in a corresponding element of an error array ΔB. [0082] The TA600 task distributes any remaining bits to be allocated. In one example, if the number of remaining bits Bav is at least equal to the number of vectors currently in the active allocation, the TA600 task increments the allocation for each vector, removing vectors whose allocations reach Bmax from the active allocation and updating Bav, until this condition no longer applies. If Bav is less than the number of vectors currently in the active allocation, the TA600 task distributes the remaining bits to the vectors that have the largest truncated residues from the TA500 task (for example, the vectors that correspond to the highest values in the error array ΔB). For vectors that are encoded by pulses, it may be desirable to increase their allocations only to values that correspond to whole numbers of pulses. [0083] Figure 7B shows a flowchart for a T265 implementation of the T260 dynamic allocation task, which also includes an occurrence of the TA310 task. [0084] Figure 8A shows a flowchart of a TA270 implementation of the T230 dynamic bit allocation task, which includes a pruning sub-task TA150. Task TA150 performs an initial pruning of a set Sv of vectors to be quantized (for example, shape vectors), based on the calculated gain factors. For example, task TA150 can be implemented to remove low-energy vectors, where the energy of a vector can be calculated as the open-loop gain. The TA150 task can be configured, for example, to prune vectors whose energies are less than (alternatively, not greater than) a limit value Ts.
In a specific example, the value of Ts is 316. Task TA150 can also be configured to terminate task TA270 if the average energy per vector is trivial (for example, not greater than 100). [0085] The TA150 task can be configured to calculate a maximum number of vectors to be pruned, Pmax, based on the total number of bits B to be allocated to the set Sv divided by the maximum number of bits Bmax to be allocated to any vector. In one example, task TA150 calculates Pmax by subtracting the ceiling of (B/Bmax) from M, where M is the number of vectors in Sv. For the case in which too many vectors are pruned, task TA150 can be configured to cancel the pruning of the vector that has the maximum energy among the currently pruned vectors, until no more than the maximum number of vectors is pruned. [0086] Figure 8B shows a block diagram of a T280 implementation of the T220 dynamic bit allocation task, which includes the pruning task TA150, the integer restriction task TA500 and the distribution task TA600. It is observed in particular that the T280 task can be implemented to produce a result that depends only on the input gain values, so that the encoder and decoder can execute the T280 task on the same quantized gain values to obtain the same bit allocations without transmitting any side information. It is also noted that task T280 can be implemented to include occurrences of tasks TA310 and/or TA400 described here, and that, additionally or alternatively, task TA300 can be implemented as task TA305. The pseudocode listing in Appendix A describes a specific implementation of the T280 task. [0087] In order to support a dynamic allocation scheme, it may be desirable to implement the shape quantizer (and the corresponding dequantizer) so as to select from codebooks of different sizes (that is, from codebooks that have different index lengths) in response to the specific number of bits that is allocated for each shape to be quantized. In one example, the SQ100 (or SQ110) shape quantizer can be implemented to use a codebook that has a shorter index length to encode the shape of a subband vector whose open-loop gain is low, and to use a codebook that has a longer index length to encode the shape of a subband vector whose open-loop gain is high. Such a dynamic allocation scheme can be configured to use a mapping between vector gain and codebook index length that is fixed or otherwise deterministic, so that the corresponding dequantizer can apply the same scheme without any additional side information. [0088] Another type of vector encoding operation is a pulse coding scheme (for example, factorial pulse coding or combinatorial pulse coding), which encodes a vector by associating it with a pattern of unit pulses and using an index that identifies this pattern to represent the vector. Figure 9 shows an example in which a vector of thirty dimensions, whose value in each dimension is indicated by the solid line, is represented by the pulse pattern (0, 0, -1, -1, +1, +2, -1, 0, 0, +1, -1, -1, +1, -1, +1, -1, -1, +2, -1, 0, 0, 0, 0, -1, +1, 0, 0, 0, 0), as indicated by the dots. This pulse pattern can typically be represented by an index that is much shorter than thirty bits. It may be desirable to use a pulse coding scheme for quantizing general vectors (for example, a residual) and/or for shape quantization.
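As a rough illustration of matching an input vector to a pattern of unit pulses (as in the pattern selection described above with reference to the shape quantizer SQ100 and in Figure 9), the following sketch uses a simple scale-and-round heuristic; it is not factorial or combinatorial pulse coding itself, and the scaling rule and function name are assumptions:

```python
import numpy as np

def match_pulse_pattern(x, num_pulses):
    """Heuristic: scale x so its absolute values sum to num_pulses, then round to signed pulses."""
    x = np.asarray(x, dtype=float)
    scale = num_pulses / np.sum(np.abs(x))       # scale so that sum(|x_i|) is about num_pulses
    pattern = np.rint(scale * x).astype(int)     # signed pulse count per position (the total may
                                                 # differ slightly from the budget after rounding)
    norm = np.linalg.norm(pattern)
    s_hat = pattern / norm if norm else pattern  # normalized pattern = quantized shape vector
    return pattern, s_hat

pattern, s_hat = match_pulse_pattern([0.1, -0.8, 0.05, 1.2, -0.3], num_pulses=4)
print(pattern, s_hat)
```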
[0089] Changing a quantization bit allocation in one-bit increments (that is, imposing a fixed one-bit quantization granularity, or "integer granularity") is relatively simple in conventional VQ, which can typically accommodate an arbitrary integer codebook index length. Pulse coding works differently, however, in the sense that the size of the quantization domain is determined not by the codebook index length, but by the maximum number of pulses that can be coded for a given input vector length. When this maximum number of pulses changes by one, the codebook index length can change by an integer greater than one (that is, the quantization granularity is variable). Consequently, changing a pulse-coding quantization bit allocation in steps of one bit (that is, imposing integer granularity) can result in allocations that are not valid. The quantization granularity for a pulse coding scheme tends to be coarse at low bit rates and decreases toward integer granularity as the bit rate increases. [0090] The length of the pulse coding index determines the maximum number of pulses in the corresponding pattern. As noted above, not all integer index lengths are valid, since increasing the length of a pulse coding index by one does not necessarily increase the number of pulses that can be represented by the corresponding patterns. Consequently, it may be desirable for a pulse-coding application of the T200 dynamic allocation task to include a task that translates the bit allocations produced by the T200 task (which are not necessarily valid in the pulse coding scheme) into pulse allocations. Figure 8C shows a flowchart of an M110 implementation of the M100 method that includes such a task T300, which can be implemented to verify that an allocation is a valid index length for the pulse codebook and to reduce an invalid allocation to the highest valid index length that is less than that of the invalid allocation. [0091] Use of the M100 method is also contemplated for a case that uses both conventional VQ and pulse-coding VQ (for example, in which some of the set of vectors will be encoded using a conventional VQ scheme, and at least one of the vectors will be encoded using a pulse coding scheme). [0092] Figure 10A shows a block diagram of a T290 implementation of the T280 task that includes the TA320, TA510 and TA610 implementations of the TA300, TA500 and TA600 tasks, respectively. In this example, the input vectors are arranged so that the last of the M sub-bands under allocation (in the zero-based indexing convention used in the pseudocode, the sub-band with index M-1) is to be coded using a pulse coding scheme (for example, factorial pulse coding or combinatorial pulse coding), while the first (M-1) sub-bands are coded using conventional VQ. For the sub-bands to be encoded using conventional VQ (for example, not by pulses), the bit allocations are calculated according to the integer constraint as described above. For the subband to be encoded by pulses, the bit allocation is calculated according to an integer restriction on the maximum number of pulses to be encoded. In an example of applying such a scheme, a selected set of perceptually significant sub-bands is encoded using conventional VQ, and the corresponding residual (for example, a concatenation of the unselected samples, or the difference between the original frame and the selected coded subbands) is encoded using pulse coding.
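The integer restriction on the maximum number of pulses mentioned above, and the helper find_pulse_fpc described below with reference to Appendix B, can be sketched as follows. This is an assumption rather than the routine from the appendix: it counts the signed unit-pulse patterns of m pulses over n positions with the combinatorial formula sum over k of C(n,k) C(m-1,k-1) 2^k, and takes the ceiling of log2 of that count as the index length in bits.

```python
from math import comb, ceil, log2

def pulse_patterns(n, m):
    """Number of length-n integer vectors whose absolute values sum to m (signed unit pulses)."""
    return sum(comb(n, k) * comb(m - 1, k - 1) * 2 ** k for k in range(1, min(n, m) + 1))

def find_max_pulses(n, bit_limit):
    """Largest pulse count whose index fits in bit_limit bits, the bits that count needs,
    and the additional bits required to allow one more pulse (cf. find_pulse_fpc)."""
    m = 0
    while ceil(log2(pulse_patterns(n, m + 1))) <= bit_limit:
        m += 1
    bits_used = ceil(log2(pulse_patterns(n, m))) if m else 0
    bits_for_next = ceil(log2(pulse_patterns(n, m + 1)))
    return m, bits_used, bits_for_next - bits_used

print(find_max_pulses(30, 20))   # (max pulses, bits needed, extra bits for one more pulse)
```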
It should be understood that, although task T280 is described with reference to pulse encoding of a single vector, task T280 can also be implemented for pulse encoding of multiple vectors (for example, a plurality of sub-vectors of a residual, such as shown in Figure 3). [0093] The TA320 task can be implemented to impose upper and/or lower limits on the initial bit allocations, as described above with reference to tasks TA300 and TA305. In this case, the subband to be encoded by pulses is excluded from the over- and/or under-allocation test. The TA320 task can also be implemented to exclude this sub-band from the reallocation made after each correction. [0094] Task TA510 imposes an integer restriction on the bit allocations Bm for the conventional VQ subbands by truncating each allocation Bm to the largest integer not greater than Bm. Task TA510 also reduces the initial bit allocation Bm for the subband to be encoded by pulses, when appropriate, by applying an integer constraint to the maximum number of pulses to be encoded. Task TA510 can be configured to apply this pulse-coding integer restriction by calculating the maximum number of pulses that can be encoded with the initial bit allocation Bm, given the length of the subband vector to be encoded by pulses, and then replacing the initial bit allocation Bm with the actual number of bits needed to encode this maximum number of pulses for such a vector length. [0095] Task TA510 also updates the value Bav according to an expression such as B - Σ Bm. The TA510 task can be configured to determine whether Bav is at least as large as the number of bits needed to increase the maximum number of pulses in the pulse-coding quantization by one and, if so, to adjust the pulse-coding bit allocation and, therefore, Bav. The TA510 task can also be configured to store the truncated residue for each subband vector to be coded using conventional VQ in a corresponding element of an error array ΔB. [0096] The TA610 task distributes the remaining Bav bits. The TA610 task can be configured to distribute the remaining bits to the subband vectors to be encoded using conventional VQ that correspond to the highest values in the error array ΔB. Task TA610 can also be configured to use any remaining bits to increase the bit allocation, if possible, for the subband to be encoded by pulses, for the case in which all of the conventional VQ bit allocations are at Bmax. [0097] The pseudocode listing in Appendix B describes a specific implementation of task T280 that includes a helper function find_pulse_fpc. For a given vector length and bit allocation limit, this function returns the maximum number of pulses that can be encoded, the number of bits needed to encode that number of pulses, and the number of additional bits that would be required if the maximum number of pulses were increased. [0098] Figure 10B shows a flowchart for a T295 implementation of the T290 dynamic allocation task that also includes an occurrence of the TA310 task. [0099] A sparse signal is often easy to code because a few parameters (or coefficients) contain most of the signal's information. When encoding a signal with both sparse and non-sparse components, it may be desirable to assign more bits to encode the non-sparse components than the sparse components. It may be desirable to emphasize the non-sparse components of a signal to improve the coding performance for these components.
Such an approach uses a measure of energy distribution within the vector (for example, a sparsity measure) to improve the coding performance for a specific signal class compared to others, which can help to ensure that non-sparse signals are well represented and to enhance the overall coding performance. [00100] A signal that has more energy may need more bits to encode. A signal that is less sparse may also need more bits to encode than a signal that has the same energy but is more sparse. A signal that is very sparse (for example, just a single pulse) is typically very easy to encode, while a signal that is well distributed (for example, very similar to noise) is typically more difficult to encode, even if the two signals have the same energy. It may be desirable to set up a dynamic allocation operation to account for the effect of the relative sparsity of the subbands on their respective relative coding difficulties. For example, such a dynamic allocation operation can be configured to weight the allocation for a less sparse signal more heavily than the allocation for a signal with the same energy that is more sparse. [00101] In an example applied to model-based coding, a concentration of energy in a subband indicates that the model fits the input signal well, so that good coding quality can be expected from a low bit allocation. For coding based on the harmonic model described here, as applied to a high band, such a case can arise with a musical signal from a single instrument. Such a signal can be referred to as "sparse". Alternatively, a flat distribution of energy may indicate that the model does not capture the signal structure as well, so it may be desirable to use a higher bit allocation to maintain the desired perceptual quality. Such a signal can be referred to as "non-sparse". [00102] Figure 11A shows a flowchart for a T225 implementation of the T220 dynamic allocation task that includes a TB100 sub-task and a TA215 implementation of the TA210 allocation calculation task. For each of the plurality of vectors, task TB100 calculates a corresponding value of a measure of energy distribution within the vector (that is, a sparsity factor). The TB100 task can be configured to calculate the sparsity factor based on the relationship between the total energy of the subband and the total energy of a subset of the coefficients of the subband. In one example, the subset is the LC largest (that is, maximum-energy) coefficients of the subband (for example, as shown in Figure 11B). Examples of values for LC include 5, 10, 15 and 20 (five, seven, ten, fifteen or twenty percent of the total number of coefficients in the subband). In this case, it can be understood that the ratio between these values [for example, (energy of the subset)/(total energy of the subband)] indicates the degree to which the energy of the subband is concentrated or distributed. Similarly, task TB100 can be configured to calculate the sparsity factor based on the number of the largest subband coefficients whose energies are sufficient to sum to a specified portion (for example, 5, 10, 12, 15, 20, 25 or 30 percent) of the total subband energy. The TB100 task can include sorting the energies of the subband coefficients. [00103] Task TA215 calculates the bit allocations for the vectors based on the corresponding gain and sparsity factors (see the sketch below).
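As a small illustration of the sparsity factor of task TB100 and of the kind of mapping that task TA215 can apply, the following sketch uses assumed values (LC = 5 largest coefficients, limits sL = 0.5 and sH = 0.9, and R = 0.7); the function names and thresholds are not from this description:

```python
import numpy as np

def sparsity_factor(subband, lc=5):
    """Ratio of the energy of the LC largest-energy coefficients to the total subband energy."""
    e = np.square(np.asarray(subband, dtype=float))
    top = np.sort(e)[::-1][:lc]                  # energies of the LC largest coefficients
    return float(np.sum(top) / np.sum(e))

def sparsity_weight(s, s_lo=0.5, s_hi=0.9, r=0.7):
    """Map a sparsity factor to an allocation weight: 1 below s_lo, R above s_hi, linear between."""
    if s <= s_lo:
        return 1.0
    if s >= s_hi:
        return r
    return 1.0 + (s - s_lo) * (r - 1.0) / (s_hi - s_lo)

flat = np.ones(32)                  # energy spread out: low sparsity factor, weight near 1
peaky = np.zeros(32); peaky[3] = 5  # energy concentrated: high sparsity factor, weight near R
print(sparsity_weight(sparsity_factor(flat)), sparsity_weight(sparsity_factor(peaky)))
```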
The TA215 task can be implemented to divide the total available bit allocation among the subbands in proportion to the values of their corresponding sparsity factors, so that more bits are allocated to the less concentrated subband or subbands. In one example, task TA215 is configured to map sparsity factors that are less than a limit value sL to one, to map sparsity factors that are greater than a limit value sH to a value R that is less than one (for example, R = 0.7), and to map sparsity factors between sL and sH linearly into the range from 1 to R. In such a case, task TA215 can be implemented to calculate the bit allocation Bm for each vector m using the mapped sparsity factor as a weight; exemplary values for each of the factors a and b include 0.5. For the case in which the vectors m are unit-norm vectors (for example, shape vectors), the energy Em of each vector in task TA215 is the corresponding gain factor. [00104] It is expressly noted that any of the occurrences of task TA210 described here can be implemented as an occurrence of task TA215 (for example, with a corresponding occurrence of the sparsity factor calculation task TB100). An encoder that performs such a dynamic allocation task can be configured to transmit an indication of the sparsity and gain factors, so that the decoder can derive the bit allocations from these values. In another example, an implementation of task TA210 described here can be configured to calculate bit allocations based on information from an LPC operation (for example, in addition to or as an alternative to the dimension and/or sparsity of the vector). For example, such an implementation of the TA210 task can be configured to produce bit allocations according to a weighting factor that is proportional to the spectral slope (that is, the first reflection coefficient). In such a case, the allocations for vectors corresponding to the low-frequency bands can be weighted more or less heavily based on the spectral slope for the frame. [00105] Alternatively or additionally, a sparsity factor described here can be used to select or otherwise calculate the value of a modulation factor for the corresponding subband. The modulation factor can then be used to modulate (for example, to scale) the subband coefficients. In a specific example, such a sparsity-based modulation scheme is applied to high-band coding. [00106] In an open-loop gain coding case, it may be desirable to configure the decoder (for example, the gain dequantizer) to multiply the open-loop gain by a correction factor y, which is a function of the number of bits that was used to encode the shape (that is, of the index lengths of the shape codebook vectors). When very few bits are used to quantize the shape, the shape quantizer is likely to produce a large error, so the vectors S and Ŝ may not match very well, and it may be desirable in the decoder to reduce the gain to reflect this error. The correction factor y represents this error only in the mean sense: it depends only on the codebook (specifically, on the number of bits in the codebooks) and not on any specific details of the input vector x. The codec can be configured so that the correction factor y is not transmitted, but instead is only read from a table by the decoder according to the number of bits that were used to quantize the shape vector. [00107] This correction factor y indicates, based on the bit rate, how closely the quantized vector Ŝ can be expected, on average, to approach the true shape S.
[00107] This correction factor y indicates, as a function of the bit rate, how closely the quantized vector Ŝ can be expected, on average, to approach the true shape S. As the bit rate goes up, the average error will decrease and the value of the correction factor y will approach one; as the bit rate drops considerably, the correlation between S and the vector Ŝ (for example, the inner product of the vector ŜT and S) will decrease, and the value of the correction factor y will also decrease. While it may be desirable to achieve the same effect as in closed-loop gain coding (for example, adaptively, input by actual input), in the open-loop case such a correction is typically available only in the average sense. [00108] Alternatively, a kind of interpolation between the open-loop and closed-loop gain methods can be performed. Such an approach augments the open-loop gain expression with a dynamic correction factor that depends on the quality of the quantization in a specific way, and not just on the average quantization error as a function of index length. Such a factor can be calculated based on the scalar product of the quantized and non-quantized shapes. It may be desirable to encode the value of this correction factor very coarsely (such as, for example, as an index into a codebook with four or eight entries), so that it can be transmitted in very few bits. [00109] Figure 12A shows a block diagram of a device MF100 for allocating bits according to a general configuration. The MF100 apparatus includes FA100 mechanisms for calculating, for each of a plurality of vectors, a corresponding gain factor of a plurality of gain factors (for example, as described herein with reference to the implementations of task TA100). The MF100 apparatus also includes FA210 mechanisms for calculating, for each of the plurality of vectors, a corresponding bit allocation that is based on the gain factor (for example, as described herein with reference to the implementations of task TA210). The MF100 apparatus also includes FA300 mechanisms for determining, for at least one of the plurality of vectors, that the corresponding bit allocation is not greater than a minimum allocation value (for example, as described herein with reference to the implementations of task TA300). The MF100 apparatus also includes FB300 mechanisms for changing the corresponding bit allocation, in response to the determination, for each of the at least one vector (for example, as described herein with reference to the implementations of task TA300).
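A minimal sketch of how the operations just listed might be combined is shown below; it assumes a simple gain-proportional split, takes the per-vector minimum as the smaller of the vector length and a fixed value, and simply drops and redistributes allocations that do not exceed that minimum. It is offered as one plausible reading of the elements above, not as the specific allocation rule of tasks TA210 and TA300.

```python
import numpy as np

def allocate_bits(vectors, total_bits, min_value=8):
    """Gain-proportional bit allocation with a minimum-allocation check.
    vectors    : list of 1-D coefficient arrays (e.g. subbands of a frame)
    total_bits : total bit budget for the frame
    min_value  : fixed value; each vector's minimum is min(len(vector), min_value)
    """
    gains = np.array([np.sqrt(float(np.sum(np.square(v)))) for v in vectors])
    if gains.sum() == 0.0:
        return [0] * len(vectors)
    alloc = np.floor(total_bits * gains / gains.sum()).astype(int)

    # Allocations that are not greater than the vector's minimum allocation
    # value are cleared, and the freed bits are redistributed among the
    # remaining vectors (one policy among several possible ones).
    for i, v in enumerate(vectors):
        if alloc[i] <= min(len(v), min_value):
            alloc[i] = 0
    spare = total_bits - int(alloc.sum())
    kept = alloc > 0
    if spare > 0 and kept.any():
        alloc[kept] += np.floor(spare * gains[kept] / gains[kept].sum()).astype(int)
    return alloc.tolist()
```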
[00110] Figure 12B shows a block diagram of a device A100 for allocating bits according to a general configuration that includes a gain factor calculator 100, a bit allocation calculator 210, a comparator 300, and an allocation adjustment module 300B. The gain factor calculator 100 is configured to calculate, for each of a plurality of vectors, a corresponding gain factor of a plurality of gain factors (for example, as described herein with reference to the implementations of task TA100). The bit allocation calculator 210 is configured to calculate, for each of the plurality of vectors, a corresponding bit allocation that is based on the gain factor (for example, as described herein with reference to the implementations of task TA210). Comparator 300 is configured to determine, for at least one of the plurality of vectors, that the corresponding bit allocation is not greater than a minimum allocation value (for example, as described herein with reference to the implementations of task TA300). The allocation adjustment module 300B is configured to change the corresponding bit allocation, in response to the determination, for each of the at least one vector (for example, as described herein with reference to the implementations of task TA300). The A100 apparatus can also be implemented to include a frame divider configured to divide a frame into a plurality of sub-vectors (for example, as described herein with reference to the implementations of task T100). [00111] Figure 13A shows a block diagram of an E100 encoder according to a general configuration that includes an instance of the A100 apparatus and a subband encoder SE10. The subband encoder SE10 is configured to quantize the plurality of vectors (or a plurality of vectors based on them, such as a corresponding plurality of shape vectors) according to the corresponding allocations calculated by the apparatus A100. For example, the subband encoder SE10 can be configured to perform a conventional VQ encoding operation and/or the pulse-coding VQ operation described herein. Figure 13D shows a block diagram of a corresponding D100 decoder that includes an instance of the A100 apparatus and a subband decoder SD10, which is configured to dequantize the plurality of vectors (or a plurality of vectors based on them, such as a corresponding plurality of shape vectors) according to the corresponding allocations calculated by the A100 apparatus. Figure 13B shows a block diagram of an E110 implementation of the E100 encoder that includes a bit packer BP10 configured to pack the encoded subbands into frames that conform to one or more codecs described herein (for example, EVRC, AMR-WB). Figure 13E shows a block diagram of a corresponding D110 implementation of the D100 decoder that includes a corresponding bit unpacker U10. Figure 13C shows a block diagram of an E120 implementation of the E110 encoder that includes instances A100a and A100b of the apparatus A100 and a residual encoder SE20. In this case, the subband encoder SE10 is arranged to quantize a first plurality of vectors (or a plurality of vectors based on them, such as a corresponding plurality of shape vectors) according to the corresponding allocations calculated by the apparatus A100a, and the residual encoder SE20 is configured to quantize a second plurality of vectors (or a plurality of vectors based on them, such as a corresponding plurality of shape vectors) according to the corresponding allocations calculated by the apparatus A100b. Figure 13F shows a block diagram of a corresponding D120 implementation of the D100 decoder that includes a corresponding residual decoder SD20, which is configured to dequantize the second plurality of vectors (or a plurality of vectors based on them, such as a corresponding plurality of shape vectors) according to the corresponding allocations calculated by the apparatus A100b. [00112] Figures 14A-E show a range of applications for the E100 encoder described here. Figure 14A shows a block diagram of an audio processing path that includes a transform module MM1 (for example, a fast Fourier transform or MDCT module) and an instance of the E100 encoder that is arranged to receive the audio frames SA10 as samples in the transform domain (that is, as coefficients in the transform domain) and to produce corresponding encoded frames SE10. [00113] Figure 14B shows a block diagram of an implementation of the path of Figure 14A, in which the transform module MM1 is implemented using an MDCT transform module.
The modified DCT module MM10 performs an MDCT operation on each audio frame to produce a set of coefficients in the MDCT domain. [00114] Figure 14C shows a block diagram of an implementation of the path of Figure 14A that includes a linear prediction coding analysis module AM10. The linear prediction coding (LPC) analysis module AM10 performs an LPC analysis operation on the frame in order to produce a set of LPC parameters (for example, filter coefficients) and an LPC residual signal. In one example, the LPC analysis module AM10 is configured to perform a tenth-order LPC analysis on a frame that has a bandwidth from zero to 4000 Hz. In another example, the LPC analysis module AM10 is configured to perform a sixth-order LPC analysis on a frame that has a frequency range of 3500 to 7000 Hz. The modified DCT module MM10 performs an MDCT operation on the LPC residual signal in order to produce a set of coefficients in the transform domain. A corresponding decoding path can be configured to decode the encoded frames SE10 and to perform an inverse MDCT transform on the decoded frames to obtain an excitation signal for input to an LPC synthesis filter. [00115] Figure 14D shows a block diagram of a processing path that includes a signal classifier SC10. The signal classifier SC10 receives frames SA10 of an audio signal and classifies each frame into one of at least two categories. For example, the signal classifier SC10 can be configured to classify a frame SA10 as speech or music, so that if the frame is classified as music, then the rest of the path shown in Figure 14D is used to encode it, and if the frame is classified as speech, then a different processing path is used to encode it. Such classification may include detection of signal activity, detection of noise, detection of periodicity, detection of sparsity in the time domain and/or detection of sparsity in the frequency domain. [00116] Figure 15A shows a block diagram of a signal classification method MZ100 that can be performed by the signal classifier SC10 (for example, on each of the audio frames SA10). The MZ100 method includes tasks TZ100, TZ200, TZ300, TZ400, TZ500, TZ600 and TZ700. Task TZ100 quantifies the level of activity in the signal. If the activity level is below a threshold, task TZ200 encodes the signal as silence (for example, using a low-bit-rate noise-excited linear prediction (NELP) scheme and/or a discontinuous transmission (DTX) scheme). If the activity level is high enough (for example, above the threshold), task TZ300 quantifies the periodicity of the signal. If task TZ300 determines that the signal is non-periodic, task TZ400 encodes the signal using an NELP scheme. If task TZ300 determines that the signal is periodic, task TZ500 quantifies the degree of sparsity of the signal in the time and/or frequency domain. If task TZ500 determines that the signal is sparse in the time domain, task TZ600 encodes the signal using a code-excited linear prediction (CELP) scheme, such as relaxed CELP (RCELP) or algebraic CELP (ACELP). If task TZ500 determines that the signal is sparse in the frequency domain, task TZ700 encodes the signal using a harmonic model (for example, passing the signal to the rest of the processing path of Figure 14D).
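The decision flow of method MZ100 can be summarized as follows; the boolean arguments stand in for the activity, periodicity and sparsity measurements quantified by tasks TZ100, TZ300 and TZ500, and the behavior chosen for a periodic signal that is sparse in neither domain is an assumption, since it is not specified above.

```python
def select_coding_scheme(is_active: bool, is_periodic: bool,
                         sparse_in_time: bool, sparse_in_freq: bool) -> str:
    """Return the coding scheme selected by the MZ100 decision flow."""
    if not is_active:
        return "silence (low-rate NELP and/or DTX)"      # tasks TZ100/TZ200
    if not is_periodic:
        return "NELP"                                     # tasks TZ300/TZ400
    if sparse_in_time:
        return "CELP (e.g. RCELP or ACELP)"               # tasks TZ500/TZ600
    if sparse_in_freq:
        return "harmonic model (path of Figure 14D)"      # task TZ700
    return "NELP"  # fallback for the unspecified case; an assumption
```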
[00117] As shown in Figure 14D, the processing path can include a perceptual pruning module PM10, which is configured to simplify the signal in the MDCT domain (for example, to reduce the number of coefficients in the transform domain to be encoded) by applying psychoacoustic criteria such as temporal masking, frequency masking and/or the hearing threshold. The PM10 module can be implemented to compute the values of such criteria by applying a perceptual model to the original audio frames SA10. In this example, the E100 encoder is arranged to encode the pruned frames to produce corresponding encoded frames SE10. [00118] Figure 14E shows a block diagram of an implementation of both paths of Figures 14C and 14D, in which the encoder E100 is arranged to encode the LPC residual. [00119] Figure 15B shows a block diagram of a communication device D10 that includes an implementation of the A100 apparatus. The D10 device includes a chip or chip set CS10 (for example, a mobile station modem (MSM) chip set) that incorporates the elements of the A100 apparatus (or MF100) and possibly of the D100 apparatus (or DF100). The CS10 chip/chip set may include one or more processors, which can be configured to run the software and/or firmware portion of the A100 or MF100 apparatus (for example, as instructions). [00120] The CS10 chip/chip set includes a receiver, which is configured to receive a radio-frequency (RF) communication signal and to decode and reproduce an audio signal encoded within the RF signal, and a transmitter, which is configured to transmit an RF communication signal that describes an encoded audio signal (for example, including codebook indices as produced by the A100 apparatus) that is based on a signal produced by the microphone MV10. Such a device can be configured to transmit and receive voice communication data wirelessly through one or more encoding and decoding schemes (also called "codecs"). Examples of such codecs include the Enhanced Variable Rate Codec, as described in document C.S0014-V, v1.0 of the Third Generation Partnership Project 2 (3GPP2), entitled "Enhanced Variable Rate Codec, Speech Service Options 3, 68 and 70 for Wideband Spread Spectrum Digital Systems", February 2007 (available online at www-dot-3gpp-dot-org); the Selectable Mode Vocoder speech codec, as described in 3GPP2 document C.S0030-0, v3.0, entitled "Selectable Mode Vocoder (SMV) Service Option for Wideband Spread Spectrum Communication Systems", January 2004 (available online at www-dot-3gpp-dot-org); the Adaptive Multi-Rate (AMR) speech codec, as described in document ETSI TS 126 092 V6.0.0 (European Telecommunications Standards Institute (ETSI), Sophia Antipolis Cedex, FR, December 2004); and the AMR Wideband speech codec, as described in document ETSI TS 126 192 V6.0.0 (ETSI, December 2004). For example, the CS10 chip or chip set can be configured to produce the encoded frames so as to conform to one or more such codecs. [00121] The D10 device is configured to receive and transmit RF communication signals through the antenna C30. The D10 device may also include a diplexer and one or more power amplifiers in the path to the antenna C30. The CS10 chip/chip set is also configured to receive user input via the keyboard C10 and to display information via the display C20.
In this example, the D10 device also includes one or more antennas C40 to support Global Positioning System (GPS) location services and/or short-range communications with an external device, such as a wireless headset (for example, Bluetooth™). In another example, such a communication device is the Bluetooth™ headset itself and is devoid of the keyboard C10, the display C20 and the antenna C30. [00122] The D10 communication device can be incorporated into various communication devices, including smart phones and laptop and tablet computers. Figure 16 shows front, rear and side views of a telephone device H100 (for example, a smart phone), which has two voice microphones MV10-1 and MV10-3 arranged on the front surface, another voice microphone MV10-2 arranged on the rear surface, an error microphone ME10 located in the upper corner of the front surface, and a noise reference microphone MR10 located on the rear surface. A loudspeaker LS10 is arranged in the upper center of the front surface near the error microphone ME10, and two other loudspeakers LS20L, LS20R are also provided (for example, for speakerphone applications). The maximum distance between the microphones of such a telephone device is typically about ten or twelve centimeters. [00123] In a multi-band encoder (for example, as shown in Figure 17), it may be desirable to perform closed-loop gain GSVQ in the low band (for example, in a dependent-mode or harmonic-mode encoder, as described herein), and to perform open-loop gain GSVQ with gain-based dynamic bit allocation (for example, according to an implementation of task T210) among the shapes in the high band. In this example, the low-band frame is the residual of a tenth-order low-band LPC analysis operation as produced from the analysis filter bank applied to an audio-frequency input frame, and the high-band frame is the residual of a sixth-order high-band LPC analysis operation as produced from the analysis filter bank applied to the audio-frequency input frame. Figure 18 shows a flowchart of a corresponding multi-band encoding method, in which the bit allocations for one or more of the indicated encodings (that is, UB-MDCT spectrum pulse encoding, harmonic subband GSVQ encoding and/or residual pulse encoding) can be performed according to an implementation of task T210. [00124] As discussed above, a multi-band coding scheme can be configured so that each of the low band and the high band is coded using an independent coding mode or a dependent (alternatively, harmonic) coding mode. For the case where the low band is encoded using an independent coding mode (for example, GSVQ applied to a set of fixed subbands), a dynamic allocation as described above can be performed (for example, according to an implementation of task T210) to divide a total bit allocation for the frame (which can be fixed or can vary from frame to frame) between the low band and the high band according to the corresponding gains. In such a case, another dynamic allocation as described above can be performed (for example, according to an implementation of task T210) to allocate the resulting low-band bit allocation among the low-band subbands, and/or another dynamic allocation as described above can be performed (for example, according to an implementation of task T210) to allocate the resulting high-band bit allocation among the high-band subbands.
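For the independent-mode case just described, the same gain-proportional principle can simply be applied twice, once between the bands and once within each band. The sketch below reuses the allocate_bits routine from the earlier sketch and is again only an illustration of the idea, not of the specific allocations of task T210.

```python
import numpy as np

def allocate_frame(low_subbands, high_subbands, frame_bits):
    """Split the frame budget between low band and high band according to
    their gains, then divide each band's share among its own subbands
    using the allocate_bits sketch shown earlier."""
    low_gain = np.sqrt(sum(float(np.sum(np.square(v))) for v in low_subbands))
    high_gain = np.sqrt(sum(float(np.sum(np.square(v))) for v in high_subbands))
    total = low_gain + high_gain
    low_bits = int(round(frame_bits * low_gain / total)) if total else frame_bits // 2
    high_bits = frame_bits - low_bits
    return (allocate_bits(low_subbands, low_bits),
            allocate_bits(high_subbands, high_bits))
```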
[00125] For the case in which the low band is encoded using a dependent (alternatively, harmonic) coding mode, it may be desirable to first allocate bits from the total bit allocation for the frame (which can be fixed or can vary from frame to frame) to the subbands selected by the coding mode. It may be desirable to use information from the low-band LPC spectrum for this allocation. In one example, the slope of the LPC spectrum (for example, as indicated by the first reflection coefficient) is used to determine the subband that has the highest LPC weight, and the maximum number of bits (for example, ten bits) is allocated to that subband (for example, for shape quantization), with correspondingly lower allocations being given to the subbands with lower LPC weights. A dynamic allocation as described above can then be performed (for example, according to an implementation of task T210) to allocate the bits remaining in the frame allocation between the low band and the high band. In such a case, another dynamic allocation as described above can be performed (for example, according to an implementation of task T210) to allocate the resulting high-band bit allocation among the high-band subbands. [00126] A coding mode selection as shown in Figure 18 can be extended to a multi-band case. In one example, each of the low band and the high band is encoded using both an independent coding mode and a dependent coding mode (alternatively, an independent coding mode and a harmonic coding mode), so that four different mode combinations are initially considered for the frame. Then, for each of the low-band modes, the best corresponding high-band mode is selected (for example, according to a comparison between the two options using a high-band perceptual metric). Of the remaining two options (that is, the independent low-band mode with its best corresponding high-band mode, and the dependent (or harmonic) low-band mode with its best corresponding high-band mode), the selection between these options is made with reference to a perceptual metric that covers both the low band and the high band. In an example of such a multi-band case, the low-band independent mode uses GSVQ to encode a set of fixed subbands, and the high-band independent mode uses a pulse coding scheme (for example, factorial pulse coding) to encode the high-band signal. [00127] Figure 19 shows a block diagram of an E200 encoder according to a general configuration, which is configured to receive audio frames as samples in the MDCT domain (that is, as coefficients in the transform domain). The encoder E200 includes an independent-mode encoder IM10 which is configured to encode a frame of the MDCT-domain signal SM10 according to an independent coding mode to produce an independently coded frame SI10. The independent coding mode groups the coefficients in the transform domain into subbands according to a predetermined (that is, fixed) subband division and encodes the subbands using a vector quantization (VQ) scheme. Examples of coding schemes for the independent coding mode include pulse coding (for example, factorial pulse coding and combinatorial pulse coding). The E200 encoder can be configured according to the same principles to receive audio frames as samples in another transform domain, such as the fast Fourier transform (FFT) domain.
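The mode selections described above (and the selector SEL10 described in the next paragraph) amount to encode-both-and-compare steps. The sketch below assumes, purely for illustration, that each mode encoder returns a coded frame together with a distortion or perceptual-metric value; this pairing is an assumption made for the example.

```python
def select_encoded_frame(frame, independent_encoder, harmonic_encoder):
    """Encode the frame with both mode encoders and keep the result whose
    distortion (or perceptual metric) value is smaller. Each encoder is
    assumed to return a (coded_frame, metric) pair."""
    coded_i, metric_i = independent_encoder(frame)
    coded_h, metric_h = harmonic_encoder(frame)
    return coded_i if metric_i <= metric_h else coded_h
```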
[00128] The E200 encoder also includes a harmonic-mode encoder HM10 (alternatively, a dependent-mode encoder) that is configured to encode the frame of the MDCT-domain signal SM10 according to a harmonic model in order to produce a harmonic-mode coded frame SD10. Either or both of the encoders IM10 and HM10 can be implemented to include a corresponding instance of the A100 apparatus, so that the corresponding coded frame is produced according to a dynamic allocation scheme described herein. The E200 encoder also includes a coding mode selector SEL10 that is configured to use a distortion measure to select one of the independent-mode coded frame SI10 and the harmonic-mode coded frame SD10 as the encoded frame SE10. The E100 encoder shown in Figures 14A-14E can be realized as an implementation of the E200 encoder. The E200 encoder can also be used to encode a low-band LPC residual (for example, 0-4 kHz) in the MDCT domain and/or to encode a high-band LPC residual (for example, 3.5-7 kHz) in the MDCT domain in a multi-band codec, as shown in Figure 17. [00129] The methods and apparatus described here can be applied generally in any transceiving and/or audio-sensing application, especially in mobile or otherwise portable instances of such applications. For example, the range of configurations described here includes communication devices that reside in a wireless telephony communication system configured to use a code-division multiple-access (CDMA) air interface. However, those skilled in the art will understand that a method and apparatus having the characteristics described herein can reside in any of the various communication systems that use a wide range of technologies known to those skilled in the art, such as systems that use Voice over IP (VoIP) over wired and/or wireless transmission channels (for example, CDMA, TDMA, FDMA and/or TD-SCDMA). [00130] It is expressly contemplated and described here that the communication devices described herein can be adapted for use in networks that are packet-switched (for example, wired and/or wireless networks arranged to carry audio transmissions according to protocols such as VoIP) and/or circuit-switched. It is also expressly contemplated and described here that the communication devices described herein can be adapted for use in narrowband coding systems (for example, systems that encode an audio frequency range of about four or five kilohertz) and/or for use in wideband coding systems (for example, systems that encode audio frequencies higher than five kilohertz), including full-band coding systems and split-band coding systems. [00131] The presentation of the described configurations is provided to allow anyone skilled in the art to make or use the methods and other structures described here. The flowcharts, block diagrams, and other structures shown and described here are examples only, and other variants of these structures are also within the scope of the description. Various modifications to these configurations are possible, and the generic principles presented here can be applied to other configurations as well. Thus, the present description is not intended to be limited to the configurations shown above, but should instead be accorded the widest scope consistent with the principles and novel features described here in any way, including in the appended claims as filed, which form a part of the original description.
[00132] Those skilled in the art will understand that information and signals can be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, and symbols referred to throughout the above description can be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [00133] Important design requirements for implementing a configuration as described here may include minimizing processing delay and/or computational complexity (typically measured in millions of instructions per second, or MIPS), especially in computationally intensive applications, such as, for example, the playback of compressed audio or audiovisual information (for example, a file or stream encoded according to a compression format, such as one of the examples identified here) or applications for wideband communications (for example, voice communications at sampling rates higher than eight kilohertz, such as 12, 16, 44.1, 48 or 192 kHz). [00134] An apparatus as described here (for example, the apparatus A100 and MF100) can be implemented in any combination of hardware with software and/or firmware that is considered suitable for the intended application. For example, the elements of such an apparatus can be manufactured as electronic and/or optical devices that reside, for example, on the same chip or among two or more chips in a chip set. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements can be implemented as one or more such arrays. Any two or more, or even all, of these elements can be implemented within the same array or arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chip set that includes two or more chips). [00135] One or more elements of the various implementations of the apparatus described here (for example, the apparatus A100 and MF100) may be implemented in whole or in part as one or more sets of instructions arranged to be executed on one or more fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs (field-programmable gate arrays), ASSPs (application-specific standard products) and ASICs (application-specific integrated circuits). Any of the various elements of an implementation of an apparatus as described herein can also be embodied as one or more computers (for example, machines including one or more arrays programmed to execute one or more sets or sequences of instructions, also called "processors"), and any two or more, or even all, of these elements can be implemented within the same computer or computers. [00136] A processor or other processing mechanism as described herein can be manufactured as one or more electronic and/or optical devices that reside, for example, on the same chip or among two or more chips in a chip set.
Examples of such arrays include fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, DSPs, FPGAs, ASSPs and ASICs. A processor or other processing mechanism as described herein can also be embodied as one or more computers (for example, machines including one or more arrays programmed to execute one or more sets or sequences of instructions) or other processors. It is possible for a processor as described here to be used to perform tasks or execute other sets of instructions that are not directly related to a procedure of an implementation of method M100 or MD100, such as a task relating to another operation of a device or system in which the processor is embedded (for example, an audio sensing device). It is also possible for part of a method as described herein to be performed by a processor of the audio sensing device and for another part of the method to be performed under the control of one or more other processors. [00137] Those skilled in the art will understand that the various illustrative modules, logic blocks, circuits, and tests and other operations described in connection with the configurations described here can be implemented as electronic hardware, computer software, or combinations of both. Such modules, logic blocks, circuits, and operations can be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an ASIC or ASSP, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these designed to produce the configuration as described herein. For example, such a configuration can be implemented, at least in part, as a hard-wired circuit, as a circuit configuration fabricated on an application-specific integrated circuit, or as a firmware program loaded into non-volatile storage or a software program loaded from or onto a data storage medium as machine-readable code, such code being instructions executable by an array of logic elements, such as a general-purpose processor or other digital signal processing unit. A general-purpose processor can be a microprocessor, but alternatively the processor can be any conventional processor, controller, microcontroller, or state machine. A processor can also be implemented as a combination of computing devices, such as, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A software module can reside on a non-transitory storage medium, such as RAM (random-access memory), ROM (read-only memory), non-volatile RAM (NVRAM) such as flash RAM, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, or a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium can be integral with the processor. The processor and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. Alternatively, the processor and the storage medium can reside as discrete components in a user terminal.
[00138] It is noted that the various methods described here (for example, implementations of method M100 or other methods described with reference to the operation of the various devices described here) can be performed by an array of logic elements such as a processor, and that the various elements of an apparatus described here can be implemented as modules designed for execution on such an array. As used herein, the term "module" or "sub-module" can refer to any method, apparatus, device, unit, or computer-readable data storage medium that includes computer instructions (for example, logical expressions) in software, hardware or firmware form. It should be understood that multiple modules or systems can be combined into one module or system, and that one module or system can be separated into multiple modules or systems to perform the same functions. When implemented in software or other computer-executable instructions, the elements of a process are essentially the code segments that perform the related tasks, such as routines, programs, objects, components, data structures, and the like. The term "software" should be understood to include source code, assembly language code, machine code, binary code, firmware, macrocode, microcode, any one or more sets or sequences of instructions executable by an array of logic elements, and any combination of such examples. The program or code segments can be stored in a processor-readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link. [00139] Implementations of the methods, schemes and techniques described herein can be tangibly embodied (for example, in tangible computer-readable features of one or more computer-readable storage media as listed here) as one or more sets of instructions executable by a machine including an array of logic elements (for example, a processor, microprocessor, microcontroller or other finite state machine). The term "computer-readable medium" can include any medium that can store or transfer information, including volatile, non-volatile, removable and non-removable storage media. [00140] Examples of a computer-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy disk or other magnetic storage, a CD-ROM/DVD or other optical storage, a hard disk or any other medium that can be used to store the desired information, a fiber optic medium, a radio-frequency link, or any other medium that can be used to carry the desired information and can be accessed. The computer data signal can include any signal that can propagate through a transmission medium such as, for example, electronic network channels, optical fibers, air, electromagnetic links, RF, and so on. The code segments can be downloaded over computer networks, such as the Internet or an intranet. In any event, the scope of the present description should not be construed as limited by such embodiments.
One or more (possibly all) of the tasks can also be implemented as code (for example, one or more sets of instructions), incorporated into a computer program product (such as, for example, one or more data storage media such as such as disks, flash memory cards or other non-volatile memory, semiconductor memory chips, etc.) that is readable and / or executable by a machine (for example, a computer) including an array of logic elements (for example, processor, microprocessor, microcontroller or other finite state machine). The tasks of implementing a method described here can also be performed by more than one arrangement or machine. In these or other implementations, tasks can be performed within a device for wireless communications, such as a cell phone or other device having such a communication capability. Such a device can be configured to communicate with circuit-switched and / or packet-switched networks (for example, using one or more protocols, such as VoIP). For example, such a device may include an RF circuitry configured to receive and / or transmit encrypted frames. [00142] It is expressly described that the various methods described here can be performed by a portable communication device, such as a telephone device, a headset, or a portable digital assistant (PDA), and that the various devices described here can be included within such a device. A typical real-time (for example, online) application is a telephone conversation conducted using such a mobile device. [00143] In one or more exemplary modalities, the operations described here can be implemented in hardware, software, firmware, or any combination of these. If implemented in software, such operations can be stored in or transmitted via a computer-readable medium such as one or more instructions or code. The term "computer-readable medium" includes computer storage media and communication media (for example, transmission). By way of example, and not by way of limitation, computer-readable storage media may comprise an array of storage elements, such as semiconductor memory (which may include, without limitation, dynamic or static RAM, ROM, EEPROM and / or Flash RAM), or ferroelectric, magneto-resistive, ovonic, polymeric or phase-altered memory; CD-ROM or other optical disk storage; and / or magnetic disk storage devices or other magnetic storage devices. Such storage media can store information in the form of instructions or data structures that can be accessed by a computer. The means of communication can comprise any means that can be used to carry a program code in the form of instructions or data structures and that can be accessed by a computer, including any means that facilitates the transfer of a computer program from one place to another. In addition, any connection is appropriately referred to as a computer-readable medium. For example, if the software is transmitted from a network site, server or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL) or wireless technologies such as infrared, radio and / or microwave, then coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and / or microwave are included in the media definition. 
The terms disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray Disc™ (Blu-ray Disc Association, Universal City, CA), where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [00144] An acoustic signal processing apparatus as described here can be incorporated into an electronic device that accepts speech input in order to control certain operations, or that may otherwise benefit from separation of desired sounds from background noises, such as, for example, a communication device. Many applications can benefit from the enhancement of, or separation of, a clear desired sound from background sounds originating from multiple directions. Such applications may include human-machine interfaces in electronic or computing devices that incorporate capabilities such as voice recognition and detection, speech enhancement and separation, voice-activated control, and the like. It may be desirable to implement such an acoustic signal processing apparatus to be suitable for devices that have only limited processing capabilities. [00145] The elements of the various implementations of the modules, elements, and devices described herein can be manufactured as electronic and/or optical devices that reside, for example, on the same chip or among two or more chips in a chip set. One example of such a device is a fixed or programmable array of logic elements, such as transistors or gates. One or more elements of the various implementations of the apparatus described here can also be implemented in whole or in part as one or more sets of instructions arranged for execution on one or more fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs, ASSPs, and ASICs. [00146] It is possible for one or more elements of an implementation of an apparatus as described here to be used to perform tasks or execute other sets of instructions that are not directly related to an operation of the apparatus, such as, for example, a task relating to another operation of a device or system in which the apparatus is embedded. It is also possible for one or more elements of an implementation of such an apparatus to have a structure in common (such as, for example, a processor used to execute portions of code corresponding to different elements at different times, a set of instructions executed to perform tasks corresponding to different elements at different times, or an arrangement of electronic and/or optical devices performing operations for different elements at different times).
APPENDIX A
Claims (21) [0001] 1. Method for dynamic allocation of bits to encode audio signals, the method CHARACTERIZED by the fact that it comprises: calculating (TA100), for each one of a plurality of vectors, a corresponding gain factor of a plurality of gain factors; calculating (TA210), by an electronic audio coding device, for each one of the plurality of vectors, a corresponding bit allocation that is based on the gain factor; determining (TA310), for at least one of the plurality of vectors, that the corresponding bit allocation is not greater than a minimum allocation value, where each corresponding minimum allocation value is calculated based on a corresponding vector length and based on a value, where the value is the same for each of the at least one vector; changing, by the electronic audio coding device, the corresponding bit allocation, for each one of the at least one vector, in response to the determination; and encoding each vector of the plurality of vectors into a corresponding allocated number of bits. [0002] 2. Method for dynamic bit allocation, according to claim 1, CHARACTERIZED by the fact that a first minimum allocation value corresponding to a first vector among the plurality of vectors is different from a second minimum allocation value corresponding to a second vector among the plurality of vectors. [0003] 3. Method for dynamic bit allocation, according to claim 1, CHARACTERIZED by the fact that each corresponding minimum allocation value is calculated as a minimum of a corresponding vector length and the value. [0004] 4. Method for dynamic bit allocation, according to claim 1, CHARACTERIZED by the fact that each corresponding minimum allocation value is calculated according to a monotonically non-decreasing function of the corresponding vector length. [0005] 5. Method for dynamic allocation of bits, according to claim 1, CHARACTERIZED by the fact that the method comprises, for each of the plurality of vectors, calculating a value of an energy distribution of the corresponding vector; and in which, for each of the plurality of vectors, the corresponding bit allocation is based on the corresponding value of the energy distribution of the corresponding vector. [0006] 6. Method for dynamic bit allocation, according to claim 1, CHARACTERIZED by the fact that the method comprises, for at least one of the plurality of vectors: determining that the corresponding bit allocation does not correspond to a valid codebook index length; and reducing the corresponding bit allocation in response to that determination. [0007] 7. Method for dynamic bit allocation, according to claim 1, CHARACTERIZED by the fact that, for at least one of the plurality of vectors, the corresponding bit allocation is the length of an index into a codebook of patterns that each have n unit pulses, and the method comprises calculating a number of bits between the corresponding bit allocation and the length of an index into a codebook of patterns that each have (n + 1) unit pulses. [0008] 8. Method for dynamic allocation of bits, according to claim 1, CHARACTERIZED by the fact that the method comprises calculating, from each of the plurality of vectors, a corresponding gain factor and a corresponding shape vector. [0009] 9.
Method for dynamic bit allocation, according to claim 1, CHARACTERIZED by the fact that the method comprises determining a length of each of the plurality of vectors, in which determining a length of each of the plurality of vectors is based on locations of a second plurality of vectors; and wherein a frame of an audio signal includes the plurality of vectors and the second plurality of vectors. [0010] 10. Method for dynamic allocation of bits, according to claim 1, CHARACTERIZED by the fact that the plurality of gain factors is calculated by dequantizing a corresponding quantized gain vector. [0011] 11. Apparatus for dynamic allocation of bits to encode audio signals, CHARACTERIZED by the fact that it comprises: means for calculating (TA100), for each one of a plurality of vectors, a corresponding gain factor of a plurality of gain factors; means for calculating (TA210), by an electronic audio coding device, for each one of the plurality of vectors, a corresponding bit allocation that is based on the gain factor; means for determining (TA310), for at least one of the plurality of vectors, that the corresponding bit allocation is not greater than a minimum allocation value, where each corresponding minimum allocation value is calculated based on a corresponding vector length and based on a value, where the value is the same for each of the at least one vector; means for changing, by the electronic audio coding apparatus, the corresponding bit allocation, for each of the at least one vector, in response to the determination; and means for encoding each vector of the plurality of vectors into a corresponding allocated number of bits. [0012] 12. Apparatus for dynamic bit allocation, according to claim 11, CHARACTERIZED by the fact that a first minimum allocation value corresponding to a first vector among the plurality of vectors is different from a second minimum allocation value corresponding to a second vector among the plurality of vectors. [0013] 13. Apparatus for dynamic bit allocation, according to claim 11, CHARACTERIZED by the fact that each corresponding minimum allocation value is calculated as a minimum of a corresponding vector length and the value. [0014] 14. Apparatus for dynamic bit allocation, according to claim 11, CHARACTERIZED by the fact that each corresponding minimum allocation value is calculated according to a monotonically non-decreasing function of the corresponding vector length. [0015] 15. Apparatus for dynamic bit allocation, according to claim 11, CHARACTERIZED by the fact that the apparatus comprises means for calculating, for each of the plurality of vectors, a value of an energy distribution of the corresponding vector; and in which, for each of the plurality of vectors, the corresponding bit allocation is based on the corresponding value of the energy distribution of the corresponding vector. [0016] 16. Apparatus for dynamic bit allocation, according to claim 11, CHARACTERIZED by the fact that the apparatus comprises means for determining, for at least one of the plurality of vectors, that the corresponding bit allocation does not correspond to a valid codebook index length, and means for reducing the corresponding bit allocation in response to that determination. [0017] 17.
Apparatus for dynamic bit allocation, according to claim 11, CHARACTERIZED by the fact that, for at least one of the plurality of vectors, the corresponding bit allocation is the length of an index into a codebook of patterns that each have n unit pulses, and the apparatus further comprises means for calculating a number of bits between the corresponding bit allocation and the length of an index into a codebook of patterns that each have (n + 1) unit pulses. [0018] 18. Apparatus for dynamic bit allocation, according to claim 11, CHARACTERIZED by the fact that the apparatus comprises means for calculating, from each of the plurality of vectors, a corresponding gain factor and a corresponding shape vector. [0019] 19. Apparatus for dynamic bit allocation, according to claim 11, CHARACTERIZED by the fact that the apparatus comprises means for determining a length of each of the plurality of vectors, in which determining a length of each of the plurality of vectors is based on locations of a second plurality of vectors; and wherein a frame of an audio signal includes the plurality of vectors and the second plurality of vectors. [0020] 20. Apparatus for dynamic bit allocation, according to claim 11, CHARACTERIZED by the fact that the plurality of gain factors is calculated by dequantizing a corresponding quantized gain vector. [0021] 21. Computer-readable storage medium, CHARACTERIZED by the fact that it has tangible features that cause a machine that reads the features to perform the method as defined in any one of claims 1 to 10.
audio stream and computer program| US8300616B2|2008-08-26|2012-10-30|Futurewei Technologies, Inc.|System and method for wireless communications| EP2182513B1|2008-11-04|2013-03-20|Lg Electronics Inc.|An apparatus for processing an audio signal and method thereof| SG172976A1|2009-01-16|2011-08-29|Dolby Int Ab|Cross product enhanced harmonic transposition| RU2519027C2|2009-02-13|2014-06-10|Панасоник Корпорэйшн|Vector quantiser, vector inverse quantiser and methods therefor| FR2947945A1|2009-07-07|2011-01-14|France Telecom|BIT ALLOCATION IN ENCODING / DECODING ENHANCEMENT OF HIERARCHICAL CODING / DECODING OF AUDIONUMERIC SIGNALS| US9117458B2|2009-11-12|2015-08-25|Lg Electronics Inc.|Apparatus for processing an audio signal and method thereof| MX2012010469A|2010-03-10|2012-12-10|Dolby Int Ab|Audio signal decoder, audio signal encoder, methods and computer program using a sampling rate dependent time-warp contour encoding.| WO2011141772A1|2010-05-12|2011-11-17|Nokia Corporation|Method and apparatus for processing an audio signal based on an estimated loudness| US8924222B2|2010-07-30|2014-12-30|Qualcomm Incorporated|Systems, methods, apparatus, and computer-readable media for coding of harmonic signals| US9208792B2|2010-08-17|2015-12-08|Qualcomm Incorporated|Systems, methods, apparatus, and computer-readable media for noise injection|KR101295729B1|2005-07-22|2013-08-12|프랑스 텔레콤|Method for switching rateand bandwidthscalable audio decoding rate| ES2559981T3|2010-07-05|2016-02-17|Nippon Telegraph And Telephone Corporation|Encoding method, decoding method, device, program and recording medium| US8924222B2|2010-07-30|2014-12-30|Qualcomm Incorporated|Systems, methods, apparatus, and computer-readable media for coding of harmonic signals| US9208792B2|2010-08-17|2015-12-08|Qualcomm Incorporated|Systems, methods, apparatus, and computer-readable media for noise injection| WO2012037515A1|2010-09-17|2012-03-22|Xiph. Org.|Methods and systems for adaptive time-frequency resolution in digital data coding| WO2012102149A1|2011-01-25|2012-08-02|日本電信電話株式会社|Encoding method, encoding device, periodic feature amount determination method, periodic feature amount determination device, program and recording medium| US9015042B2|2011-03-07|2015-04-21|Xiph.org Foundation|Methods and systems for avoiding partial collapse in multi-block audio coding| US8838442B2|2011-03-07|2014-09-16|Xiph.org Foundation|Method and system for two-step spreading for tonal artifact avoidance in audio coding| WO2012122299A1|2011-03-07|2012-09-13|Xiph. 
Org.|Bit allocation and partitioning in gain-shape vector quantization for audio coding| EP3321931B1|2011-10-28|2019-12-04|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Encoding apparatus and encoding method| RU2505921C2|2012-02-02|2014-01-27|Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд."|Method and apparatus for encoding and decoding audio signals | WO2013147666A1|2012-03-29|2013-10-03|Telefonaktiebolaget L M Ericsson |Transform encoding/decoding of harmonic audio signals| DE202013005408U1|2012-06-25|2013-10-11|Lg Electronics Inc.|Microphone mounting arrangement of a mobile terminal| CN103516440B|2012-06-29|2015-07-08|华为技术有限公司|Audio signal processing method and encoding device| EP2685448B1|2012-07-12|2018-09-05|Harman Becker Automotive Systems GmbH|Engine sound synthesis| KR101714278B1|2012-07-12|2017-03-08|노키아 테크놀로지스 오와이|Vector quantization| US8885752B2|2012-07-27|2014-11-11|Intel Corporation|Method and apparatus for feedback in 3D MIMO wireless systems| US9129600B2|2012-09-26|2015-09-08|Google Technology Holdings LLC|Method and apparatus for encoding an audio signal| KR102215991B1|2012-11-05|2021-02-16|파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카|Speech audio encoding device, speech audio decoding device, speech audio encoding method, and speech audio decoding method| MX341885B|2012-12-13|2016-09-07|Panasonic Ip Corp America|Voice audio encoding device, voice audio decoding device, voice audio encoding method, and voice audio decoding method.| US9577618B2|2012-12-20|2017-02-21|Advanced Micro Devices, Inc.|Reducing power needed to send signals over wires| BR112015016275B1|2013-01-08|2021-02-02|Dolby International Ab|method for estimating a first sample of a first subband signal in a first subband of an audio signal, method for encoding an audio signal, method for decoding an encoded audio signal, system, audio encoder and decoder audio| SG11201505893TA|2013-01-29|2015-08-28|Fraunhofer Ges Zur Förderung Der Angewandten Forschung E V|Noise filling concept| CN111477245A|2013-06-11|2020-07-31|弗朗霍弗应用研究促进协会|Speech signal decoding device and speech signal encoding device| CN107316647B|2013-07-04|2021-02-09|超清编解码有限公司|Vector quantization method and device for frequency domain envelope| EP2830059A1|2013-07-22|2015-01-28|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Noise filling energy adjustment| CN104347082B|2013-07-24|2017-10-24|富士通株式会社|String ripple frame detection method and equipment and audio coding method and equipment| US9224402B2|2013-09-30|2015-12-29|International Business Machines Corporation|Wideband speech parameterization for high quality synthesis, transformation and quantization| US8879858B1|2013-10-01|2014-11-04|Gopro, Inc.|Multi-channel bit packing engine| WO2015049820A1|2013-10-04|2015-04-09|パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ|Sound signal encoding device, sound signal decoding device, terminal device, base station device, sound signal encoding method and decoding method| WO2015057135A1|2013-10-18|2015-04-23|Telefonaktiebolaget L M Ericsson |Coding and decoding of spectral peak positions| CN105659320B|2013-10-21|2019-07-12|杜比国际公司|Audio coder and decoder| CN110649925A|2013-11-12|2020-01-03|瑞典爱立信有限公司|Partitioned gain shape vector coding| US20150149157A1|2013-11-22|2015-05-28|Qualcomm Incorporated|Frequency domain gain shape estimation| ES2741506T3|2014-03-14|2020-02-11|Ericsson Telefon Ab L M|Audio coding method and apparatus| CN104934032B|2014-03-17|2019-04-05|华为技术有限公司|The method and apparatus that voice signal is handled according to frequency 
domain energy| US9542955B2|2014-03-31|2017-01-10|Qualcomm Incorporated|High-band signal coding using multiple sub-bands| EP3723086A1|2014-07-25|2020-10-14|FRAUNHOFER-GESELLSCHAFT zur Förderung der angewandten Forschung e.V.|Audio signal coding apparatus, audio signal decoding apparatus, audio signal coding method, and audio signal decoding method| US9672838B2|2014-08-15|2017-06-06|Google Technology Holdings LLC|Method for coding pulse vectors using statistical properties| US9336788B2|2014-08-15|2016-05-10|Google Technology Holdings LLC|Method for coding pulse vectors using statistical properties| US9620136B2|2014-08-15|2017-04-11|Google Technology Holdings LLC|Method for coding pulse vectors using statistical properties| WO2016064730A1|2014-10-20|2016-04-28|Audimax, Llc|Systems, methods, and devices for intelligent speech recognition and processing| US20160232741A1|2015-02-05|2016-08-11|Igt Global Solutions Corporation|Lottery Ticket Vending Device, System and Method| WO2016142002A1|2015-03-09|2016-09-15|Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.|Audio encoder, audio decoder, method for encoding an audio signal and method for decoding an encoded audio signal| DE102015104864A1|2015-03-30|2016-10-06|Thyssenkrupp Ag|Bearing element for a stabilizer of a vehicle| KR20180026528A|2015-07-06|2018-03-12|노키아 테크놀로지스 오와이|A bit error detector for an audio signal decoder| EP3171362B1|2015-11-19|2019-08-28|Harman Becker Automotive Systems GmbH|Bass enhancement and separation of an audio signal into a harmonic and transient signal component| US10210874B2|2017-02-03|2019-02-19|Qualcomm Incorporated|Multi channel coding| US10825467B2|2017-04-21|2020-11-03|Qualcomm Incorporated|Non-harmonic speech detection and bandwidth extension in a multi-source environment| CN108153189B|2017-12-20|2020-07-10|中国航空工业集团公司洛阳电光设备研究所|Power supply control circuit and method for civil aircraft display controller| WO2019165642A1|2018-03-02|2019-09-06|Intel Corporation|Adaptive bitrate coding for spatial audio streaming| CN110704024B|2019-09-28|2022-03-08|中昊芯英科技有限公司|Matrix processing device, method and processing equipment|
Legal status:
2018-12-26 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2019-09-10 | B06U | Preliminary requirement: requests with searches performed by other patent offices; procedure suspended [chapter 6.21 patent gazette]
2020-11-17 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-02-02 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 29/07/2011, SUBJECT TO THE LEGAL CONDITIONS.
Priority:
Application number | Filing date | Patent title
US36966210P | 2010-07-30 |
US 61/369,662 | 2010-07-30 |
US36970510P | 2010-07-31 |
US36975110P | 2010-08-01 |
US 61/369,751 | 2010-08-01 |
US37456510P | 2010-08-17 |
US 61/374,565 | 2010-08-17 |
US38423710P | 2010-09-17 |
US 61/384,237 | 2010-09-17 |
US201161470438P | 2011-03-31 |
US 13/193,529 | 2011-07-28 |
US 13/193,529 (US9236063B2) | 2011-07-28 | Systems, methods, apparatus, and computer-readable media for dynamic bit allocation
PCT/US2011/045862 (WO2012016126A2) | 2011-07-29 | Systems, methods, apparatus, and computer-readable media for dynamic bit allocation